One of the most convincing arguments for full disclosure is this: while the polite security researcher waits for the vendor to issue a patch, the vulnerability MAY already have been sold and used to exploit systems. Everyone in charge of administering a system therefore has a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.
That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well. The key word here is “possible”, not “likely”, or so I thought when I started writing this post. After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities. How likely is it that two security researchers will find the same vulnerability?
Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is analogous to the birthday problem. Let’s assume that there are X successful security researchers in a year, each finding one vulnerability out of the N vulnerabilities available to be found. In 2006, 6560 vulnerabilities were found, and 4876 in 2005 (according to the National Vulnerability Database). Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000; this is most surely an underestimation. I’ll assume that all of these are equally likely to be found. An additional twist on the birthday problem is that people are entering and leaving the room; not all X are present at the same time. This is because we worry about two vulnerabilities being found within the grace period given to a vendor.
If there are more successful researchers in the room than there are vulnerabilities, then a collision is guaranteed. Let’s say that the grace period given to a vendor is one month, so the number of researchers in the room at any one time is Y = X/12. Then there would need to be 120,000 successful security researchers in a year for a collision to be guaranteed. For fewer researchers, the likelihood that two of them find the same vulnerability is 1 - exp(-Y(Y-1)/(2N)) (cf. the birthday problem approximation on Wikipedia). Let’s assume that there are 5000 successful researchers in a given year, roughly matching the average number of vulnerabilities reported in 2005 and 2006. The probability that two researchers find the same vulnerability within a given grace period is then (a short script reproducing these numbers follows the table):
| Grace Period | Probability |
|---|---|
| 1 month | 0.9998 |
| 1 week | 0.37 |
| 1 day | 0.01 |
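For readers who want to check the arithmetic, here is a small Python sketch (my own illustration, not part of the original analysis) that reproduces these figures with the birthday-problem approximation; the values of N and X are simply the assumptions stated above.

```python
import math

N = 10_000  # assumed number of vulnerabilities available to be found in a year
X = 5_000   # assumed number of successful researchers in a year

def collision_probability(y, n):
    """Birthday-problem approximation: P(at least two researchers find the same flaw)."""
    return 1 - math.exp(-y * (y - 1) / (2 * n))

# Number of researchers "in the room" during each grace period
for label, periods_per_year in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    y = math.ceil(X / periods_per_year)
    print(f"{label}: {collision_probability(y, N):.4f}")
```

The output rounds to roughly the 0.9998, 0.37, and 0.01 shown in the table above.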
In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account. Has it always been like this?
Let’s assume that in any given year, there are twice as many vulnerabilities to be found as there are reported vulnerabilities. If we set N = 2X and fix the grace period at one week, what was the probability of collision in different years? The formula becomes 1 - exp(-Y(Y-1)/(4X)), where Y is the ceiling of X/52. (A small script recomputing this table appears after it.)
| Year | Vulnerabilities Reported | Probability |
|---|---|---|
| 1988-1996 | 0 | |
| 1997 | 252 | 0.02 |
| 1998 | 246 | 0.02 |
| 1999 | 918 | 0.08 |
| 2000 | 1018 | 0.09 |
| 2001 | 1672 | 0.15 |
| 2002 | 1959 | 0.16 |
| 2003 | 1281 | 0.11 |
| 2004 | 2363 | 0.20 |
| 2005 | 4876 | 0.36 |
| 2006 | 6560 | 0.46 |
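Again purely as an illustration, this sketch recomputes the table under the stated assumptions (N = 2X, one-week grace period, Y = ceiling of X/52):

```python
import math

# Vulnerabilities reported per year (from the table above)
reported = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
            2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

for year, x in sorted(reported.items()):
    n = 2 * x               # assume twice as many vulnerabilities exist as are reported
    y = math.ceil(x / 52)   # researchers "in the room" during a one-week grace period
    p = 1 - math.exp(-y * (y - 1) / (2 * n))
    print(f"{year}: {p:.2f}")
```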
So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps a fair one in 2000-2003, but it is now unacceptably long. These calculations are of course very approximate, but they should be useful enough to serve as guidelines. They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.
In any case, we can’t afford, as a matter of national and international cyber-security, to let vendors waste time before producing patches; vendors need to take responsibility, even if the vulnerability is not publicly known. This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now. These figures are a serious problem for managing security with patches, as opposed to coding securely from the start: I believe it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant. Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.
Note: the astute reader will remark that the above statistics are for any two vulnerabilities to match, whereas for patching we are talking about a specific vulnerability being discovered independently. The odds of that specific occurrence are much smaller. However, systematic management by patches has to consider all vulnerabilities, which brings us back to the above calculations.
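To make that contrast concrete, here is a rough sketch (my own addition, under the same assumptions of 10,000 findable vulnerabilities and 5000 researchers per year) comparing the chance that some two researchers collide with the chance that one specific vulnerability is rediscovered during a one-week grace period:

```python
import math

N = 10_000                  # assumed pool of findable vulnerabilities
Y = math.ceil(5_000 / 52)   # researchers active during a one-week grace period

# Probability that some two researchers find the same vulnerability (birthday approximation)
any_collision = 1 - math.exp(-Y * (Y - 1) / (2 * N))

# Probability that one *specific* vulnerability is independently rediscovered
specific_rediscovery = 1 - (1 - 1 / N) ** Y

print(f"any collision:        {any_collision:.2f}")    # about 0.37
print(f"specific rediscovery: {specific_rediscovery:.3f}")  # about 0.01
```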
Vulnerability disclosure is such a painful issue. However, some people are trying to make it as painful as possible. They slap and kick people with the release of 0-day vulnerabilities, and tell them it’s for their own good. In their fantasies, sometime in the future, we’ll be thanking them. In reality, they make me feel sympathy for the vendors.
They cite disillusionment with the “responsible disclosure” process. They believe that this process somehow forces them to wait indefinitely on the pleasure of the vendor. While it is true that many vendors won’t and don’t fix known issues unless those issues are public or public disclosure is threatened, it bemuses me that these people are unwilling to give the vendor a chance and wait a few weeks. They use the excuse of a few bad vendors, a few delayed fixes, or even “user smugness” to systematically treat vendors and their clients badly. This shows recklessness, impatience, intransigence, bad judgment, and a lack of discernment.
I agree that reporting vulnerabilities correctly is a thankless task. Besides my previous adventure with a web application, when I reported a few vulnerabilities to CERT/CC I never received any reply, not even an automated receipt; it was like sending messages into a black hole. Some vendors can become defensive and unpleasant instead. However, none of that justifies failing to be gallant and to first give the other side the opportunity to behave badly. If you don’t do at least that, then you are part of the problem. As in many real-life problems, the first one to use his fists is the loser.
What these security vigilantes are really doing is taking the vendor’s clients hostage just to make an ideological point. That is, they use the threat of security exploits to coerce or intimidate vendors and society for the sake of their objectives. They believe that the ends justify the means. Blackmail is done for personal gain, so what they are doing doesn’t fit the blackmail category, and it’s more than simple bullying. Whereas the word “terrorism” has been overused and brandished too often as a scarecrow, compare the above to the definition of terrorism. I realize that using this word, even correctly, can raise a lot of objections. If you accept that a weaker form of terrorism replaces physical violence with other threats, then it would be correct to call these people “small-time terrorists” (0-day pun intended). Whatever you want to call them, in my opinion they are no longer just vigilantes, and certainly not heroes. The only thing that can be said for them is that at least they didn’t try to profit directly from the disclosures.
Finally, let me make clear that I want to be informed, and I want disclosures to happen. However, I’m certain that uncivil 0-day disclosures aren’t part of the answer. There is interesting coverage of this and related issues at C/NET.
Peter Gutmann, a scientist at the University of Auckland, has recently written an essay about DRM (Digital Rights Management) in the new Windows Vista OS. The essay is quite interesting, and is certainly thought-provoking. His “Executive Executive Summary” is very quotable:
The Vista Content Protection specification could very well constitute the longest suicide note in history.
Well worth reading and thinking about—I suggest you take a look.
First off, a new build of PHPSecInfo is out: Version 0.1.2, build 20061218. Here’s what’s new:
- Code is now licensed under the “New BSD” license; see LICENSE.
- Added PhpSecInfo_Test_Core_Allow_Url_Include to test for allow_url_include in PHP5.2 and above.
- Fixed a bug in the post_max_size check where the upload_max_size value was being checked instead.
- Changed the cURL file_support test to recommend upgrading to the newest version of PHP rather than disabling support for the ‘file://’ protocol in cURL.
- Removed =& calls that force pass-by-reference in PHP4, so as not to throw PHP5 STRICT notices. This means passing objects by value in PHP4, but that seems acceptable for our purposes (memory usage isn’t terribly high).
- Fixed a bug in PhpSecInfo_Test_Session_Use_Trans_Sid where the wrong ini key was requested (thanks Mark Wallert).
- New, detailed README file with explanations and basic usage instructions.
- Now providing an md5 hash for releases.
Here’s what I’m planning to do in the next few releases:
I’m also going to look into options to reformat the test result structure, so it plays more nicely with templating systems. No promises on how this will go, but we’ll see.
I was interviewed for an article, Configuration: the forgotten side of security, about proactive security. I am a big believer in proactive security. However, I do not discount the need for reactive security. In the email interview I stated the following:
I define proactive security as a method of protecting information and resources through proper design and implementation to reduce the need for reactive security measures. In contrast, reactive security is a method of remediation and correction used when your proactive security measures fail. The two are interdependent.
I was specifically asked for best practices on setting up UNIX/Linux systems. My response was to provide some generic goals for configuring systems, which surprisingly made it into the article. I avoided listing specific tasks or steps because those change over time and vary based on the systems used. I have written a security configuration guide or two in my time, so I know how quickly they become out of date. Here are the goals again:
The five basic goals of system configuration:
In all, the most exciting aspect is that I was quoted in an article alongside Prof. Saltzer. That’s good company to have.